Academic Open Internet Journal, ISSN 1311-4360, Volume 23, 2008

Measuring Quality Attributes of Web-based Applications
Part-I: Assessment and Design

E-mail: rsdhawan@rediffmail.com
E-mail: rsagwal@rediffmail.com
Abstract: This paper predicts Web metrics for evaluating the reliability and maintainability of hyperdocuments. In the age of Information and Communication Technology (ICT), the Web and the Internet have brought significant changes to Information Technology (IT) and its related scenarios. This paper therefore attempts to trace out Web-based measurements that support the creation of efficient Web-centric applications. The dramatic increase in Web site development and usage has created a need for Web-based metrics that accurately assess the effort in a Web-based application. We carried out an empirical study with the students of an advanced university class and with Web designers who use various client-server Web technologies to build Web-based applications, in order to evaluate the hypermedia design model. Our first goal was to compare the relative importance of each design activity. Second, we tried to assess the accuracy of a priori design effort predictions and the influence of some factors on the effort needed for each design activity. Third, we studied the quality of the resulting designs based on the construction of a User Behavior Model Graph (UBMG) that captures the dynamics of user behavior, which is discussed in Part-II of this paper. The results obtained from the assessment help to systematically identify effort assessment and failure points in Web systems, and simplify the evaluation of the reliability of these systems. The theoretical aspects and design are described in Part-I; Part-II describes the variation of the effort estimate with the help of analysis and implementation models of Web-based applications.
Keywords: Empirical software engineering, Web-based effort estimation, Web-based design, Web metrics.
1. Introduction

A Web-based application is essentially a client-server system that combines traditional business logic and functionality, usually server based, with the hypermedia navigation and data-entry facilities provided by browser technology running on the client. Through a client Web browser, users are able to perform business operations and thereby change the state of business data on the server. The range of Web-based applications varies enormously, from simple (static) Web sites that are essentially hypertext document presentation applications, to sophisticated high-volume e-commerce applications often involving supply, ordering, payment, tracking and delivery of goods or the provision of services (i.e. dynamic and active Web sites). We have focused on the implementation and comparison of effort measurement models for Web-based hypermedia applications, based on the implementation phase of the development life cycle. For this work, we studied various size measures at different points in the development life cycle of Web-based systems in order to estimate effort, and these measures have been compared on the basis of several predictions. The main objective of design metrics is to provide basic feedback on the design being measured: there may be a general lack of objects, polymorphism or inheritance, a high level of coupling between object classes, or uneven behaviour among the classes. The major aim of this paper is to demonstrate an approach that appears promising in terms of producing a direct assessment of Web-based metrics results. Web-based systems consist of Web objects (i.e. Web documents, images, pictures, sounds, scripts, ActiveX controls). Developing a good Web-based application, however, is expensive, mostly in terms of time and of the degree of difficulty it poses for Web designers. In the present scenario, companies developing Web-based systems face the problem of estimating the required development effort within a fixed time frame. This problem does not yet have a standard solution. Moreover, effort estimation models that have been used for many years in traditional software development are not very accurate for Web-based software development effort estimation [1], and the rapid evolution and growth of Web-related technology, tools and methodologies quickly makes historical information and software-engineering effort estimation models obsolete. Like any other software process, Web application development would benefit from early-stage effort estimation. Keeping the above-mentioned requirements in mind, we have been engaged in a number of activities involving the study of Web-based systems through practical and theoretical analysis. Efforts have been made to understand the design specifications and to develop the corresponding Web-based modules using client-server Web design technologies [2] [3] [4] [5] [6]. The modules have been designed to work in client-server and stand-alone modes under different environments.
2. Literature review
The literature shows that over the years several techniques for estimating development effort have been suggested across the software development life cycle, and these have been compared on the basis of several predictions. Measurement of the quality of software products has long been fraught with difficulties. The first is related to the nature of software, which is a complex intellectual artifact developed through successive iterations and phases until deployed and used in a variety of organizational and industrial contexts. A second difficulty arises from the variety of quality models (such as those proposed by Boehm, McCall and Dromey [7-9]), which have not received the subsequent continuous investment needed to generate the corresponding measurement instruments. A third difficulty is the lack of tools in the software engineering community for representing viewpoints quantitatively while keeping track of the value of individual views on quality. Finally, a fourth difficulty relates to the largely paper-based measurement process: software engineers working in highly tool-based environments are culturally disinclined to such paper-based processes. In the literature there are only a few examples of effort estimation models for Web-based development, as most work proposes methods and tools as a basis for process improvement and higher product quality. Several development effort estimation methods have been proposed since the beginning of software engineering as a research area. These methods can be classified as those for traditional software versus those for Web-oriented software. The traditional effort estimation methods are used to estimate the development effort of software consisting of programs in a programming language, which eventually interact with data files stored in a database. The Web-based methods, on the other hand, use different metrics and focus on estimating the development effort of products that generally involve code in Web-based client-server technologies, imagery, look-and-feel, information structure, navigation and multimedia objects. Traditional effort estimation methods like COCOMO [10] are mainly based on metrics like Lines Of Code (LOC) [11] or Function Points (FP) [12]. Estimations supported by LOCs have shown several problems. Most practitioners working on Web projects agree that LOCs are not suitable for early estimation because they depend on the design [13]. Another reported problem is that the work involved in developing multimedia objects and look-and-feel cannot be measured in LOCs. Also, a substantial amount of reliable historical information is needed to estimate effort using this metric, which reduces the capability to obtain reliable fast estimations; speed is an important requisite for Web-based applications. Similarly, traditional estimation methods based on FPs are not appropriate because FPs do not consider imagery, navigation design, look-and-feel, or multimedia objects, among others. Boehm later proposed COCOMO II, which can alternatively use LOCs, FPs or Object Points [14]. Although COCOMO II was not defined to support development effort estimation for Web applications, many people have found ways to adapt the Object Point concept in order to obtain a sizing estimation [15]. Object Points and COCOMO II seem acceptable for traditional software projects, but they are not good enough to obtain accurate effort estimations for Web-based information systems: the complexity of the estimation process and the need for historical information make them difficult to apply to Web-based applications. Several size metrics have been proposed for Web-based applications, such as Object Points, Application Points and Multimedia Points [16]. However, the most appropriate seems to be Halstead's equation [12], which is used to calculate the volume or size of Web Objects in terms of operands and operators. The combination of COCOMO II and Web Objects is, at this moment, the most appropriate method for estimating the development effort of Web applications. However, this combination is not feasible for estimating the development effort of small to large projects that require fast estimation with little historical information.
2.1 Web-based measurements in perspective
To date, very little research work has been carried out in the area of effort estimation and assessment techniques for Web systems. Software effort estimation and assessment is crucial for high-volume Web-based hypermedia applications, where failures can result in loss of revenue and dissatisfied customers. Web-based hypermedia applications have led to the emergence of new e-commerce models that impose very high reliability and availability requirements. Companies developing Web-based systems face the challenge of estimating the required development effort in a very short time frame, a problem that does not yet have a standard solution. Effort estimation models that have been used for many years in traditional software development are not very accurate for Web-based software development effort estimation [1]. Web-based projects are naturally short and intensive [17], so the lack of an appropriate effort estimation model pushes developers into highly risky estimations. Moreover, the rapid evolution and growth of Web-related technology, tools and methodologies quickly makes historical information obsolete. Nelson et al [18] compute the reliability number as the ratio of the number of page errors to the total number of hits on the page for a test case. The computation is based on a static representation of the Web pages and does not consider users' behavior. There are commercial tools such as LogExpert and Analog [19] that give statistics like page time, pages accessed and referrer details, but do not give comprehensive session-level information such as session length, mix of sessions, session count, or the probability of navigation from one page to another; thus, they do not report the dynamic aspects of users' navigation. Wang et al [20] have proposed the construction of a tree view of the user's navigational flow, taking Web server log files as input and using the referrer-id field to derive the result, which represents a dynamic form of input domain model [21]. Here, the tree view is computed through a depth-first traversal algorithm over the log files. The model represents a system in which users avoid re-traversal by remembering the pages traversed before. Menasce et al [22] have proposed a state transition graph to capture the behavior of users for Web workload characterization. The technique proposed by Sengupta [23] takes the navigational flow as input and uses the client-id field in the access log to identify the unique sessions, the probabilities associated with the occurrence of each session, and the page-level transition probabilities within a session. Reliability is computed by failure data analysis using metrics such as the Mean Time Between Failures (MTBF), Mean Time To Fail (MTTF) and the reliability number [24]. In Web systems, the data for failure analysis has primarily been captured from access logs, considering HTTP error return codes 4xx and 5xx for valid sessions only.
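The Nelson-style reliability computation from access logs described above can be sketched in a few lines. This is a minimal illustration, assuming the Common Log Format; the sample entries are invented for the example and are not data from any cited study:

```python
# Sketch: Nelson-style page reliability from Web access-log lines.
# Assumes Common Log Format, where the HTTP status code is the
# second-to-last whitespace-separated field on each line.
def page_reliability(log_lines):
    """Return 1 - (error hits / total hits), counting HTTP 4xx/5xx as failures."""
    total = errors = 0
    for line in log_lines:
        fields = line.split()
        if len(fields) < 2:
            continue                     # skip malformed lines
        status = fields[-2]              # status code precedes the byte count
        total += 1
        if status[0] in ("4", "5"):
            errors += 1
    return 1.0 if total == 0 else 1.0 - errors / total

# Hypothetical log excerpt: 4 hits, one 404 and one 500.
sample = [
    '10.0.0.1 - - [01/Jan/2008:10:00:00] "GET /index.html HTTP/1.1" 200 1043',
    '10.0.0.2 - - [01/Jan/2008:10:00:05] "GET /missing.html HTTP/1.1" 404 512',
    '10.0.0.1 - - [01/Jan/2008:10:00:09] "GET /cart.html HTTP/1.1" 500 0',
    '10.0.0.3 - - [01/Jan/2008:10:00:12] "GET /about.html HTTP/1.1" 200 980',
]
print(page_reliability(sample))  # 2 failures out of 4 hits -> 0.5
```

A session-aware computation, as in Sengupta's technique, would additionally group lines by client-id before counting; the sketch above deliberately stays at the per-hit level of the Nelson ratio.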
3. Development of Web-based application
The unique nature of Web-based applications broadens the role of traditional project management and adds a new dimension to the software development process. Web-based applications often contain significant multimedia content (images, movie clips, sound clips and text) requiring specialist resources for its development. In Web-based application development, the participation and contribution of Web designers, programmers, analysts, managers, domain experts and others play an important role. For the purposes of estimating software development effort, multimedia content is assumed to exist, and the effort required for its production lies outside the scope of the software engineering process; however, the effort of integrating these elements needs to be taken into account. The development of Web-based applications involves various Web-based client-server technologies: presentation (e.g. HTML, DHTML, XML); scripting and programming languages (e.g. PHP, ColdFusion, VBScript, JavaScript, Perl, ActiveX, Servlets, Java, AJAX, Python); and network protocols and distributed computing technologies (e.g. DOM, DCOM, HTTP, FTP, TCP/IP, CORBA, RMI, JNI, JavaBeans). The unique and dominant technology in Web-based applications is HTML (HyperText Markup Language) and, more recently, Dynamic HTML and XML (Extensible Markup Language), used to construct Web pages. Web pages may or may not include scripts, modules, multimedia or text content, but almost always comprise a proportion of HTML or DHTML, which specifies how a (client) Web page should be rendered in a browser.
4. Software metrics and effort estimation
Reliable software effort estimation and assessment is critical for project selection, project planning and project control. Software metrics are quantifiable measures used to measure specific attributes of a software system, software development resources, and/or the software development process. They are designed to give a view of the software from some perspective, such as performance, design or maintainability. Software is measured (i) to indicate the quality of the product, (ii) to assess the productivity of the people who produce the product, (iii) to assess the benefit derived from new software engineering tools and methods, (iv) to form a baseline for estimation, and (v) to help justify requests for new tools or training. A number of metrics have been proposed to quantify parameters like size, complexity and reliability of software products. In reality, software metrics include much more than primitive measures of program size: they include calculations based on measurements of any or all components of the software development phases. Metrics help us understand the technical process that is used to develop a product; the process is measured to improve it, and the product is measured to increase quality. The following areas of software development can benefit from a well-planned metrics program: (i) project management, (ii) product quality, (iii) product performance, (iv) development process, and (v) cost and schedule estimation.

Three types of metrics are employed for software development:

(i) Product-level metrics: used to quantify characteristics of the software product being developed. In general, product metrics describe characteristics of the product such as size, complexity, design features, performance and quality level.

(ii) Process-level metrics: used to quantify characteristics of the environment or the process being employed to develop software. In general, process metrics can be used to improve software development and maintenance; examples include the effectiveness of defect removal during development, the pattern of testing defect arrival, and the response time of the fix process. Process metrics are further divided into (a) life cycle metrics, (b) management metrics, and (c) maturity metrics. Life cycle metrics are further classified as problem definition metrics, requirement analysis and specification metrics, design metrics, implementation metrics, and maintenance metrics [25-32]. Management metrics are further classified as project management metrics (milestone metrics, risk metrics, workflow metrics, controlling metrics, and management database metrics), quality management metrics (customer satisfaction metrics, review metrics, productivity metrics, efficiency metrics, and quality assurance metrics), and configuration management metrics (change control metrics and version control metrics) [33-43]. Maturity metrics are further classified as organization metrics; resource, personnel and training metrics; technology management metrics; documented standards metrics; process metrics; and data management and analysis metrics [44-49].

(iii) Project-level metrics: describe the project characteristics and execution. Examples include the number of software developers, the staffing pattern over the life cycle of the software, cost, schedule, and productivity. Some metrics belong to multiple categories; for example, the in-process quality metrics of a project are both process metrics and project metrics. Project metrics can be further divided into (a) structure metrics, (b) quality metrics, (c) size metrics, (d) architecture metrics, and (e) complexity metrics. Structure metrics are further classified as component characteristics, structure characteristics, and psychological rules metrics [50-51]. Quality metrics are further classified as functional metrics, reliability metrics, usability metrics, efficiency metrics, maintainability metrics, and portability metrics [52-54]. Size metrics are further classified as number of elements, development metrics, and size of components [51], [55]. Architecture metrics are further classified as component metrics, architecture characteristics, and architecture standard metrics [51], [56]. Complexity metrics are further classified as computational complexity metrics and psychological complexity metrics [57-58]. Besides these, there are some general software metrics (performance metrics, paradigm metrics, and replacement metrics) and personnel metrics (programming experience metrics, communication level metrics, productivity metrics, and team structure metrics) [58-62].
5. Support for defining Web-based effort measurement
The frameworks will be based on a series of checklists that a Web-based project uses simultaneously for issue identification and measurement definition, as follows [63]:

(i) The size checklist: can be used to help define counts of physical and logical source lines of code in a Web-based application. There are attributes to separate development status (e.g., estimated or planned) and to separate the data by language. Other attributes include statement type, origin, usage, delivery, functionality, and replications.

(ii) The effort measures checklist: can be used to help define counts of development effort. There are attributes for type of labor, hour information, employment class, labor class, activity, and product-level functions.

(iii) The problem count checklist: can be used to help define counts of defects and enhancements regarding software products. There are attributes for problem status, problem type, uniqueness, criticality, urgency, and finding attribute (e.g., design, code, inspections, reviews, or testing).

(iv) The schedule measures checklist: allows the project to determine which milestones and deliverables it will track. The schedule checklists provide the capability to separate the items to be tracked by individual builds and overall. For each tracked item there is the capability to define exit criteria and to decompose the item further to track key events, for example, when sign-off by the user, management, and/or quality assurance is obtained. As part of the schedule series of checklists, the ability to track schedule and progress by counting completed work units is also included.

(v) Implementation issues: implementation translates design specifications into source code. The primary goal of implementation is to write source code and internal documentation so that conformance of the code to its specifications can easily be verified, and so that debugging, testing, and modification are eased. The implementation team should be provided with a well-defined set of software requirements, an architectural design specification, and a detailed design description. In a well-known experiment, Weinberg [64] gave five programmers five different implementation goals for the same program: minimize the memory required, maximize output readability, maximize source text readability, minimize the number of source statements, and minimize development time; each programmer succeeded primarily on the objective he had been asked to optimize.
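The physical/logical distinction drawn by the size checklist in item (i) can be illustrated with a minimal sketch. The counting conventions below (hash comments, backslash line continuations, one statement per remaining line) are simplifying assumptions for the example, not the checklist's own definitions:

```python
# Sketch: counting physical vs. logical source lines of code.
# Conventions assumed: '#' starts a comment-only line; a trailing
# backslash continues the same logical statement onto the next line.
def count_sloc(source_text):
    physical = logical = 0
    continuation = False
    for raw in source_text.splitlines():
        line = raw.strip()
        if not line:
            continue                        # blank lines count as neither
        physical += 1
        if line.startswith("#"):
            continue                        # comment lines are physical only
        if continuation:
            continuation = line.endswith("\\")
            continue                        # still the same logical statement
        logical += 1
        continuation = line.endswith("\\")
    return physical, logical

code = "# setup\nx = 1\ny = 2 + \\\n    3\n\nprint(x + y)\n"
print(count_sloc(code))  # (5, 3): 5 physical lines, 3 logical statements
```

A real size checklist would also record the status, origin, and language attributes mentioned above alongside each count; the sketch covers only the raw counting step.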
6. General evaluation of size measures for Web-based designs
Web designers recognize the importance of realistic effort estimates to the successful management of software projects, and the Web is no exception. Estimates are necessary throughout the whole development life cycle: they are fundamental for determining a project's feasibility in terms of cost-benefit analysis and design, and for managing resources effectively. Size, which can be described in terms of length, functionality and complexity, is often a major determinant of estimations. Most prediction models to date concentrate on functional measures of size, although length and complexity are also essential aspects of size for analyzing the overall effect of parametric influence (including quantitative parameters such as size, number of defects and months, and qualitative parameters such as complexity, speed, required reliability, tool usage, and analyst capability), sensitivity, risk identification, software reuse, and COTS (Commercial-Off-The-Shelf) based systems. An overall case-study evaluation is therefore required to predict the size metrics characterizing length, complexity and functionality for Web design and estimation [65]. The parametric influence, covering both qualitative and quantitative aspects, is predicted through a case-study evaluation and hypothetical analysis in which a set of proposed or reused size metrics for estimation prediction must be measured [66-69]. To date there are only a few examples of estimation prediction models for Web development in the literature, as most work proposes methods and tools as a basis for process improvement and higher product quality.
6.1 Size metrics for Web-based design and estimation
Web-based design and estimation activities should be based upon a conceptual framework for software measurement, resting on the following principles: (a) determining relevant measurement goals, (b) recognizing the entities to be examined, (c) identifying the level of maturity the organization has reached, and (d) classifying the functionality of metrics through standardization.
6.2 Factors involved in Web-based design
(a) Create Web pages that conform to accepted and published standards for HTML, CSS, XML and other Web-based specifications; such pages are much more likely to be interpreted correctly by the various user agents (browsers) that exist. Additionally, if style sheets are used, you should conform to a number of measurement units, including absolute units such as inches, centimeters and points, as well as relative measures such as percentages and em units [2].

(b) Know the difference between structural and presentation elements, and use stylesheets when appropriate. It should be noted, though, that stylesheet support is not fully implemented in all user agents (browsers); this means that, at least for the near future, some presentation elements in HTML will still be used and will remain easy to measure. Moreover, if a Web-based design contains multiple links, and these links connect to different stylesheets, measurement becomes more complex; so for an effective Web-based design, always try to use a single stylesheet.

(c) The Web-based design should include rich meta-content about the purpose and function of elements, providing valuable additional information on the function and meaning of the various tags in the larger scope of the page. This can increase the accessibility of a Web page.

(d) Make sure your pages can be navigated by keyboard; Web-based design measurements should also cover keyboard navigation.

(e) Provide alternative methods of access to non-textual content, including images, scripts, multimedia, tables, forms and frames, for user agents that do not display them. The foremost example of this is the "ALT" attribute of the <IMG> tag, which allows an author to provide alternative text in case a user agent cannot display graphics. Accessibility of a Web design can also be measured and maintained by providing off-line, or at least off-Web, methods of doing things, such as an e-mail link or a response form.

(f) Be wary of common pitfalls that can reduce accessibility while measuring the Web design of your site. Examples of these pitfalls include: (i) blinking text, (ii) use of ASCII art, (iii) link names that do not make sense out of context, (iv) links that are not separated by printable characters, and (v) use of platform-dependent scripting.

(g) For effective Web measurements, it is always better to define functions in the <HEAD> tag to reduce the complexity and the effort required. In this way security is improved, and this type of Web design will be more reliable.
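Point (e) lends itself to automated measurement. As a minimal sketch, the following uses Python's standard html.parser to flag <IMG> tags that lack an ALT attribute; the sample page is hypothetical and stands in for a real document fetched from a site under measurement:

```python
# Sketch: accessibility check for missing ALT attributes on <img> tags,
# using only the standard library.
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing_alt = []   # src values of images without ALT text

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if "alt" not in attr_map:
                self.missing_alt.append(attr_map.get("src", "<no src>"))

# Hypothetical page: one image with ALT text, one without.
page = ('<html><body>'
        '<img src="logo.gif" alt="company logo">'
        '<img src="chart.gif">'
        '</body></html>')
checker = AltChecker()
checker.feed(page)
print(checker.missing_alt)  # ['chart.gif']
```

The same parser skeleton could be extended to the other pitfalls in point (f), for example flagging anchor text such as "click here" that makes no sense out of context.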
In general, Web measurements are performed on the three broad categories of Web documents. For static Web documents, simplicity, reliability, and performance (SRP) are tested and measured. For dynamic Web documents, instead of checking SRP, the cost, complexity, and speed of retrieval of information are verified. For active Web documents, the ability to update information continuously is checked and measured.
7. Existing work for Web effort estimation
Since the beginning of software engineering, several development effort estimation methods have been proposed. For our research, we classify these methods as those for traditional software and those for Web-oriented software. The traditional effort estimation methods are used to estimate the development effort of software that consists of programs in a programming language, which eventually interact with data files or databases. The Web-oriented methods, on the other hand, use different metrics and focus on estimating the development effort of products that are event-oriented; these products generally involve code in a programming language, imagery, look-and-feel, information structure, navigation and multimedia objects. Several size metrics have been proposed for Web applications, like Object Points, Application Points and Multimedia Points [16]. However, the most appropriate seems to be Web Objects (WO) [1]. WOs are an indirect metric based on a predefined vocabulary that allows Web system components to be defined in terms of operands and operators. To estimate the number of WOs in a Web-based application, it is necessary to identify all the operators and operands present in the system. They are then categorized using a predefined table of Web Objects predictors and classified into three levels of complexity: low, average and high. The final number of WOs in a Web-based application is computed using Halstead's equation [12] (see equation 2), and is known as the volume or size of the system.
Effort = A · Π(i=1..8) Ci · (Size)^P1;  Duration = B · (Effort)^P2        (1)

where A is the effort coefficient, B is the duration coefficient, Ci are the cost drivers, P1 is the effort power law, and P2 is the duration power law.

V = N log2(n) = (N1 + N2) log2(n1 + n2)        (2)

where N1 and N2 are the total numbers of operator and operand occurrences, and n1 and n2 are the numbers of distinct operators and operands.
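Equations (1) and (2) can be sketched together in a few lines. All coefficient and cost-driver values below are illustrative placeholders, not calibrated figures from COCOMO II, WebMo, or any other model:

```python
import math

# Sketch of equations (1) and (2); every numeric input is a placeholder.

def halstead_volume(N1, N2, n1, n2):
    """Equation (2): V = (N1 + N2) * log2(n1 + n2)."""
    return (N1 + N2) * math.log2(n1 + n2)

def effort_and_duration(size, A, B, cost_drivers, P1, P2):
    """Equation (1): Effort = A * prod(Ci) * Size^P1; Duration = B * Effort^P2."""
    product = 1.0
    for c in cost_drivers:
        product *= c
    effort = A * product * size ** P1
    duration = B * effort ** P2
    return effort, duration

# Hypothetical system: 120 total operators, 80 total operands,
# 25 distinct operators, 40 distinct operands.
size = halstead_volume(120, 80, 25, 40)
effort, duration = effort_and_duration(size, A=1.2, B=2.5,
                                       cost_drivers=[1.0] * 8,
                                       P1=1.05, P2=0.35)
print(round(size, 1))  # ≈ 1204.5 (volume units)
```

With neutral cost drivers (all Ci = 1), the effort reduces to A · Size^P1, which makes the role of the power-law exponent easy to see in isolation.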
8. The RSWAEA Method
In order to deal with the problem of effort estimation, we analyzed Web-based software development processes related to the development of small and medium-size Web-based information systems. Based on the analysis of these results, we identified both the low usability of the well-known effort estimation methods and the need for a model to support estimation in such scenarios. We therefore developed a method for rapidly estimating Web-based software development effort and duration, intended to be adopted by the software community for the development of Web-based hypermedia applications. We call it the RS Web Application Effort Assessment (RSWAEA) method. The method is useful for estimating the development effort of small to large-size Web-based information systems. The DWOs (Data Web Objects) are an approximation of the whole size of the project, so it is necessary to know what portion of the whole system the DWOs represent. This knowledge is obtained through a relatively simple process (briefly described in Part-II of this paper). Since the estimation factors in the computation of the effort are subjective, flexible and adjustable for each project, the role of the expert becomes very relevant. Once the value of the portion, or representativeness, is calculated, the expert can adjust the total number of DWOs and calculate the development effort using the following equation.
E = (DWO · (1 + X*))^P · CU · Π(i=1..8) cdi        (3)
Here E is the development effort measured in man-hours, CU is the cost of user, cdi are the cost drivers, DWO corresponds to the Web application size in terms of Data Web Objects, X* is the coefficient of DWO representativeness, and P is a constant. The estimated number of real Data Web Objects (DWO*) is calculated as the product of the initial DWOs and the representativeness coefficient X*. This coefficient is a historical value that indicates the portion of the final product functionality that cannot be inferred from the system data model; the process of defining the coefficient is presented in the next section. The cost of user takes values between 0 and 5. A CU value of 0 means the system reuses all the functionality associated with each user type, so the corresponding development effort will also be zero. A cost of user of 5, on the other hand, means that there is no reuse of any kind in implementing the system functionality for each user type. CU thus represents the system functionality that is associated with each user type. The cost drivers (cdi) are defined in Part-II of this paper and are similar to those defined by Reifer for WebMo [13]. The last adjustable coefficient in RSWAEA is the constant P, the exponent applied to DWO*. This exponent is a value very close to 1.01, and it must be neither higher than 1.12 nor lower than 0.99. The constant's value depends on the project size measured in DWOs. To determine it, various statistical analyses were carried out on a range of Web-based applications; as a result, this constant was assigned the value 1.09 for projects smaller than 300 DWOs, and 1.03 for projects larger than 300 DWOs.
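Equation (3) can be sketched as a short function. The DWO count, X*, CU and cost-driver values below are hypothetical inputs chosen for illustration; only the rule for P follows the values stated above:

```python
# Sketch of the RSWAEA estimate, equation (3). All inputs are hypothetical.
def rswaea_effort(dwo, x_star, cu, cost_drivers):
    """E = (DWO * (1 + X*))^P * CU * prod(cdi), with P chosen by project size."""
    p = 1.09 if dwo < 300 else 1.03   # constant P as assigned in the text
    adjusted = (dwo * (1 + x_star)) ** p
    product = 1.0
    for cd in cost_drivers:
        product *= cd
    return adjusted * cu * product

# Hypothetical small project: 150 DWOs, 10% of functionality not inferable
# from the data model (X* = 0.10), moderate reuse (CU = 2), neutral drivers.
estimate = rswaea_effort(150, 0.10, 2, [1.0] * 8)
print(round(estimate, 1))  # development effort in man-hours
```

Note how CU acts as a pure multiplier: with CU = 0 (full reuse) the estimate collapses to zero, exactly as the text describes.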
9. Measurements and validations
On the basis of the above-mentioned Web-based designs and the projected measurement techniques, the following can be examined and calculated for predicting an efficient Web-based design: (a) identification of measures that can be used to predict the effort for Web design; (b) identification of a model that can predict the effort required using the above-said measures; and (c) identification of a methodology that can help Webmasters control the effort for Web design. These three items will be discussed and implemented in Part-II of this paper.
Our major aim is to shed light on this issue by identifying size metrics, features, functions, and cost drivers for early Web cost estimation, based on the current practices of several Web sites worldwide. This has been achieved using surveys (based on both hypothetical analyses and analyses of Web companies). The proposed Web-based measurement techniques are organized into categories and rankings, and on this basis the above metrics and methods have been proposed for quickly estimating the development effort of Web-based information systems. The proposed method uses raw historical information about development capability and coarse-grained information about the system to be developed in order to carry out such estimations. Generally, these estimations are the basis of the budget quoted to the client, and based on such budgets software development companies sign contracts with clients. Without an appropriate model, cost estimation is done with high uncertainty, and development effort estimation relies only on the experience of an expert, whose estimations are generally not formally documented. Finally, on the basis of these results, we identify the low usability of the well-known effort estimation methods and the need for a model to support estimation in such a scenario. For this reason, we develop an embedded method, using the above-mentioned metrics and methods, for quickly estimating Web-based software development effort and duration, adapted to the development of Web-based projects. These methods are specifically applicable to estimating the development effort of small, medium, and large Web-based information systems in immature development scenarios. Furthermore, for Web-based software effort estimation and measurement, the compatibility, usability, maintainability, complexity, cost, configuration, time requirements, types of interfaces, traceability, and nature of the Web design will also be examined and considered. Finally, we will validate this study on the basis of Web-based measurements, taking into account the features and functionality of the application to be developed, in order to propose efficient Web-based measurements for effort prediction. The task size and the consequences of estimation errors will be predicted, and positive results would suggest that effort estimation for Web-based applications is an achievable task in the near future. Although developers spend time trying to estimate software development effort realistically and reliably, they usually have very little time for this task, and very little historical information is available. These characteristics tend to make estimations less reliable regarding both time and cost. An expert knows the development scenario and the development capabilities of his/her organization, but generally does not have good tools to support accurate, reliable, and fast estimation.
In order to obtain fast and reliable effort estimations for Web-based applications, part-II of this paper presents the RSWAEA method and the UBMH to systematically identify the effort assessment, estimation, and failure points in Web systems.
10. Acknowledgments
A major part of the research reported in this paper was carried out at U.I.E.T and D.C.S.A, K.U.K, Haryana, India. We are highly indebted to the Ernet section of K.U.K for their gracious help and constant support. The authors would also like to thank the nameless individuals who worked hard to supply the data.
11. References
[1] D.J. Reifer, Web Development: Estimating Quick-to-Market Software, IEEE Software, Vol. 17, No. 6, pp. 57-64, 2000.
[2] Thomas A. Powell, The Complete Reference HTML & XHTML, Fourth Edition (Tata McGraw-Hill (TMH), New Delhi, 2005).
[3] H.M. Deitel, P.J. Deitel & A.B. Goldberg, Internet & World Wide Web How to Program (Pearson Prentice-Hall, New Delhi, India, Third Edition, 2006).
[4] Michael J. Young, Step by Step XML (Prentice-Hall, New Delhi, 2006).
[5] Ivan Bayross, SQL, PL/SQL, The Programming Language of Oracle (Techmedia Publications, New Delhi, 2000).
[6] Harris, JavaScript Programming (Prentice-Hall, New Delhi, India, 2005).
[7] Boehm, B.W., J.R. Brown, J.R. Kaspar, M. Lipow & G. MacLeod, Characteristics of Software Quality (Amsterdam: North-Holland, 1978).
[8] McCall, J.A., P.K. Richards & G.F. Walters, Factors in Software Quality, Vols. 1-2, AD/A-049-014/015/05, Springfield, VA: National Technical Information Service, 1977.
[9] Dromey, R. Geoff, Cornering the Chimera, IEEE Software, Vol. 13, No. 1, January 1996, pp. 33-34.
[10] B. Boehm, Software Engineering Economics (Prentice-Hall, January 1982).
[11] D. Phillips, The Software Project Manager's Handbook (IEEE Computer Society Press, 1998).
[12] International Function Point Users Group, Function Point Counting Practices Manual, Release 4.0, URL: http://www.ifpug.org/publications/manual.htm, 1994.
[13] D.J. Reifer, Web Development: Estimating Quick-to-Market Software, IEEE Software, Vol. 17, No. 6, pp. 57-64, November-December 2000.
[14] B. Boehm, Anchoring the Software Process, IEEE Software, Vol. 13, No. 4, pp. 73-82, July 1996.
[15] B. Boehm, E. Horowitz, R. Madachy, D. Reifer, B.K. Clark, B. Steece, A. Winsor Brown, S. Chulani and C. Abts, Software Cost Estimation with COCOMO II (Prentice-Hall, 1st Edition, January 2000).
[16] J.C. Cowderoy, Size and Quality Measures for Multimedia and Web-Site Production, Proc. of the 14th International Forum on COCOMO and Software Cost Modeling, Los Angeles, CA, October 1999.
[17] D. Lowe, Web Engineering or Web Gardening?, WebNet Journal, Vol. 1, No. 1, January-March 1999.
[18] E. Nelson, Estimating Software Reliability from Test Data, Microelectronics and Reliability, 17(1), pp. 67-73, 1978.
[19] Advanced tools intended to produce individual page statistics, available at http://www.analog.cx; http://www.weblogexpert.com, etc.
[20] Wen-Li Wang, Mei-Huei Tang, User-Oriented Reliability Modeling for a Web System, 14th International Symposium on Software Reliability Engineering (ISSRE), November 17-21, 2003.
[21] University of Maryland, NASA High Dependability Computing Program, http://www.cebase.org/hdcp/frames.html?/hdcp/models/input_domain_models.html.
[22] D.A. Menascé, V.A.F. Almeida, R. Fonseca, M.A. Mendes, A Methodology for Workload Characterization of E-Commerce Sites, Proceedings of the 1st ACM Conference on Electronic Commerce, 1999.
[23] Shubhashis Sengupta, Characterizing Web Workloads - a Transaction-Oriented View, IEEE/IFIP 5th International Workshop on Distributed Computing (IWDC 2003).
[24] John D. Musa, Anthony Iannino, Kazuhira Okumoto, Software Reliability (McGraw-Hill, p. 18, 1987).
[25] Arthur, L.J., Rapid Evolutionary Development, John Wiley & Sons, 1992.
[26] Arnold, R.S., Software Reengineering, IEEE Computer Society Press, 1994.
[27] Dyer, M., The Cleanroom Approach to Quality Software Development, John Wiley & Sons, 1992.
[28] Fenton, N., Software Metrics: A Rigorous Approach, Chapman & Hall, 1991.
[29] IEEE Std. 1074-1991, Standard for Developing Software Life Cycle Processes, IEEE Computer Society, 1992.
[30] Oman, P.W., Lewis, T.G., Milestones in Software Engineering, IEEE Computer Press, 1990.
[31] Putnam, L.H., Myers, W., Measures for Excellence, Yourdon Press, 1992.
[32] Thayer, R.H., Dorfman, M., Systems and Software Requirements Engineering, IEEE Computer Society Press, 1990.
[33] Boehm, B.W., Software Risk Management, IEEE Computer Society Press, 1989.
[34] Brinkworth, J.W.O., Software Quality Management, Prentice-Hall, 1992.
[35] Ebenau, R.G., Strauss, S.H., Software Inspection Process, McGraw-Hill, 1994.
[36] Gilb, T., Principles of Software Engineering Management, Prentice-Hall, 1994.
[37] Godart, C., Charoy, F., Databases for Software Engineering, Prentice-Hall, 1994.
[38] Hayes, B.E., Measuring Customer Satisfaction, ASQC Quality Press, Milwaukee, 1992.
[39] Hollocker, C.P., Software Reviews and Audits Handbook, John Wiley & Sons, 1990.
[40] Jones, C., Applied Software Measurement, McGraw-Hill, 1991.
[41] Jones, C., Assessment and Control of Software Risks, Yourdon Press, 1994.
[42] Kan, S.H., Metrics and Models in Software Quality Engineering, Addison-Wesley, 1995.
[43] Redman, Data Quality, Prentice-Hall, 1992.
[44] Arthur, L.J., Improving Software Quality: An Insider's Guide to TQM, Evolutionary Development, John Wiley & Sons, 1993.
[45] Application of Metrics in Industry: A Quantitative Approach to Software Management, CSSE, London, 1993.
[46] Bache, R., Bazzana, G., Software Metrics for Product Assessment, McGraw-Hill, 1994.
[47] Grady, Robert B., Practical Software Metrics for Project Management and Process Improvement, Englewood Cliffs, N.J.: Prentice-Hall PTR, 1992.
[48] Humphrey, W.S., Managing the Software Process, Prentice-Hall, 1990.
[49] Schmauch, C.H., ISO 9000 for Software Developers, ASQC Quality Press, Milwaukee, 1994.
[50] Conger, S., The New Software Engineering, Int. Thomson Publ., 1994.
[51] Marciniak, J.J., Encyclopedia of Software Engineering, John Wiley & Sons, Vol. II, 1994.
[52] Evans, M.W., Marciniak, J., Software Quality Assurance and Management, John Wiley & Sons, 1987.
[53] ISO/IEC 9126, International Standard ISO/IEC 9126:1991(E) for Software Product Evaluation, Geneva, 1992.
[54] Lyu, M.R., Handbook of Software Reliability Engineering, IEEE Computer Society Press, 1995.
[55] Bache, R., Bazzana, G., Software Metrics for Product Assessment, McGraw-Hill, 1992.
[56] Norris et al., The Healthy Software Project, John Wiley & Sons, 1993.
[57] Zuse, H., Software Complexity: Measures and Methods, De Gruyter Publ., 1991.
[58] Johnson, J.R., The Software Factory: Managing Software Development and Maintenance, QED Information Science Publ., 1991.
[59] Hetzel, B., Making Software Measurement Work, John Wiley & Sons, 1993.
[60] Greenberg, S., Computer-Supported Cooperative Work and Groupware, Academic Press, 1991.
[61] NASA, Software Measurement Guidebook, Maryland, 1995.
[62] Pressman, R.S., Software Shock: The Danger & The Opportunity, DHP, 1991.
[63] James A. Rozum, Defining and Understanding Software Measurement Data, http://www.sei.cmu.edu/pub/documents/articles/pdf/using.data.asm.pdf.
[64] Richard Fairley, Software Engineering Concepts (TMH, New Delhi), 2004.
[65] Pfleeger, S.L., Jeffery, R., Curtis, B., Kitchenham, B., Status Report on Software Measurement, IEEE Software, March/April 1997.
[66] Fenton, N.E., Pfleeger, S.L., Software Metrics: A Rigorous & Practical Approach, 2nd Edition (PWS Publishing Company and International Thomson Computer Press, 1997).
[67] Hatzimanikatis, E., Tsalidis, C.T., Christodoulakis, D., Measuring the Readability and Maintainability of Hyperdocuments, Journal of Software Maintenance: Research and Practice, 1995, 7, pp. 77-90.
[68] Warren, P., Boldyreff, C., Munro, M., The Evolution of Websites, Proc. Seventh International Workshop on Program Comprehension, IEEE Computer Society Press, Los Alamitos, Calif., 1999, pp. 178-185.
[69] MacDonell, S.G., Fletcher, T., Metric Selection for Effort Assessment in Multimedia Systems Development, Proc. Metrics'98, 1998.
About the authors:
Sanjeev Dhawan is a Lecturer in Computer Science & Engineering at Kurukshetra University, Kurukshetra, Haryana. He holds postgraduate degrees in Master of Science (M.Sc.) in Electronics, Master of Technology (M.Tech.) in Computer Science & Engineering, and Master of Computer Applications (M.C.A.) from Kurukshetra University. At present he is pursuing a PhD in Computer Science at Kurukshetra University. His current research interests include Web engineering, advanced computer architectures, Intel microprocessors, programming languages, and bio-molecular-level computing.
Rakesh Kumar received his PhD in Computer Science and M.C.A. from Kurukshetra University, Kurukshetra, Haryana. He is currently a Senior Lecturer in the Department of Computer Science & Applications, Kurukshetra University. His current research focuses on programming languages, information retrieval systems, software engineering, artificial intelligence, and compiler design.
Technical College - Bourgas,
All rights reserved, © March, 2000